268 research outputs found

    Perceiving self-motion in depth: the role of stereoscopic motion and changing-size cues

    During self-motion, different patterns of optic flow are presented to the left and right eyes. Previous research has, however, focussed mainly on the self-motion information contained in a single pattern of optic flow. The current studies investigated the role that binocular disparity plays in the visual perception of self-motion, showing that the addition of stereoscopic cues to optic flow significantly improves forwards linear vection in central vision. Improvements were also achieved by adding changing-size cues to sparse (but not dense) flow patterns. These findings show that the assumption in the heading literature that stereoscopic cues only facilitate self-motion perception when the optic flow has an ambiguous depth ordering does not apply to vection. Rather, it was concluded that both stereoscopic and changing-size cues provide additional motion-in-depth information, which is used in perceiving self-motion.

    Vection in depth during treadmill walking

    Vection has typically been induced in stationary observers (ie conditions providing visual-only information about self-motion). Two recent studies have examined vection during active treadmill walking--one reported that treadmill walking in the same direction as the visually simulated self-motion impaired vection (Onimaru et al, 2010 Journal of Vision 10(7):860), the other reported that it enhanced vection (Seno et al, 2011 Perception 40 747-750; Seno et al, 2011 Attention, Perception, & Psychophysics 73 1467-1476). Our study expands on these earlier investigations of vection during active observer movement. In experiment 1 we presented radially expanding optic flow and compared the vection produced in stationary observers with that produced during walking forward on a treadmill at a 'matched' speed. Experiment 2 compared the vection induced by forward treadmill walking while viewing expanding or contracting optic flow with that induced by viewing playbacks of these same displays while stationary. In both experiments, subjects' tracked head movements were either incorporated into the self-motion displays (as simulated viewpoint jitter) or simply ignored. We found that treadmill walking always reduced vection (compared with stationary viewing conditions) and that simulated viewpoint jitter always increased vection (compared with constant-velocity displays). These findings suggest that while consistent visual-vestibular information about self-acceleration increases vection, biomechanical self-motion information reduces this experience (irrespective of whether or not it is consistent with the visual input).

    The search for instantaneous vection: An oscillating visual prime reduces vection onset latency

    Typically it takes 10 seconds or more to induce a visual illusion of self-motion (vection). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer-generated simulation experiences and training in the future, at no additional cost.
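    The display manipulation described above (adding simulated viewpoint oscillation to a radial self-motion display) can be sketched as follows. This is a minimal illustration, not the study's actual stimulus code: the dot cloud, forward speed, oscillation amplitude, and frequency are all assumed values chosen for clarity.

```python
import math

def project_dot(x, y, z, cam_x, focal=1.0):
    """Perspective-project a 3D dot onto the image plane of a camera
    at (cam_x, 0, 0) looking down +z. Returns (screen_x, screen_y)."""
    return (focal * (x - cam_x) / z, focal * y / z)

def viewpoint_x(t, amplitude=0.1, freq_hz=1.0):
    """Simulated horizontal viewpoint oscillation (sinusoidal).
    Amplitude and frequency here are illustrative, not the paper's."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def flow_frame(dots, t, speed=1.3):
    """One frame of radially expanding flow with added oscillation:
    forward self-motion reduces each dot's depth, producing expansion,
    while the oscillating camera x-position adds a lateral component."""
    cam_x = viewpoint_x(t)
    frame = []
    for (x, y, z0) in dots:
        z = z0 - speed * t          # forward translation toward the dots
        if z > 0.1:                 # keep dots in front of the camera
            frame.append(project_dot(x, y, z, cam_x))
    return frame
```

    Rendering successive frames of `flow_frame` yields an expanding dot pattern whose focus of expansion sways horizontally, which is the kind of oscillating display the abstract contrasts with constant-velocity flow.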

    Effects of gaze on vection from jittering, oscillating, and purely radial optic flow

    In this study, we examined the effects of different gaze types (stationary fixation, directed looking, or gaze shifting) and gaze eccentricities (central or peripheral) on the vection induced by jittering, oscillating, and purely radial optic flow. Contrary to proposals of eccentricity independence for vection (e.g., Post, 1988), we found that peripheral directed looking improved vection and peripheral stationary fixation impaired vection induced by purely radial flow (relative to central gaze). Adding simulated horizontal or vertical viewpoint oscillation to radial flow always improved vection, irrespective of whether instructions were to fixate, or look at, the center or periphery of the self-motion display. However, adding simulated high-frequency horizontal or vertical viewpoint jitter was found to increase vection only when central gaze was maintained. In a second experiment, we showed that alternating gaze between the center and periphery of the display also improved vection (relative to stable central gaze), with greater benefits observed for purely radial flow than for horizontally or vertically oscillating radial flow. These results suggest that retinal slip plays an important role in determining the time course and strength of vection. We conclude that how and where one looks in a self-motion display can significantly alter vection by changing the degree of retinal slip.

    View specific generalisation effects in face recognition: Front and yaw comparison views are better than pitch

    It can be difficult to recognise new instances of an unfamiliar face. Recognition errors in this situation appear to be viewpoint-dependent, with error rates increasing with the angular distance between the face views. Studies using front views for comparison have shown that recognising faces rotated in yaw can be difficult, and that recognition of faces rotated in pitch is more challenging still. Here we investigate the extent to which viewpoint-dependent face recognition depends on the comparison view. Participants were assigned to one of four comparison view groups: front, ¾ yaw (right), ¾ pitch-up (above), or ¾ pitch-down (below). On each trial, participants matched their particular comparison view to a range of yaw- or pitch-rotated test views. Results showed that groups with a front or ¾ yaw comparison view had superior overall performance and more successful generalisation to a broader range of both pitch and yaw test views than groups with pitch-up or pitch-down comparison views, both of which had a very restricted generalisation range. Regression analyses revealed the importance of image similarity between views for generalisation, with a lesser role for 3D face depth. These findings are consistent with a view-interpolation solution to view generalisation of face recognition, with front and ¾ yaw views being most informative.

    Vection in depth during consistent and inconsistent multisensory stimulation

    We examined vection induced during physical or simulated head oscillation along either the horizontal or depth axis. In the first two experiments, during active conditions, subjects viewed radial-flow displays which simulated viewpoint oscillation that was either in-phase or out-of-phase with their own tracked head movements. In passive conditions, stationary subjects viewed playbacks of displays generated in earlier active conditions. A third, control experiment was also conducted in which physical and simulated fore-aft oscillation was added to a lamellar-flow display. Consistent with ecology, when active in-phase horizontal oscillation was added to a radial-flow display it modestly improved vection compared to active out-of-phase and passive conditions. However, when active fore-aft head movements were added to either a radial-flow or a lamellar-flow display, both in-phase and out-of-phase conditions produced very similar vection. Our research shows that consistent multisensory input can enhance the visual perception of self-motion in some situations. However, it is clear that multisensory stimulation does not have to be consistent (ie ecological) to generate compelling vection in depth.

    Relative visual oscillation can facilitate visually induced self-motion perception

    Adding simulated viewpoint jitter or oscillation to displays enhances visually induced illusions of self-motion (vection). The cause of this enhancement is yet to be fully understood. Here, we conducted psychophysical experiments to investigate the effects of different types of simulated oscillation on vertical vection. Observers viewed horizontally oscillating and non-oscillating optic flow fields simulating downward self-motion through an aperture. The aperture was visually simulated to be nearer to the observer and was stationary, or oscillated in-phase or counter-phase with the background horizontal oscillations of the optic flow. Results showed that vection strength was modulated by the oscillation of the aperture relative to the background optic flow: vertical vection strength increased as the relative oscillatory horizontal motion between the flow and the aperture increased. However, such increases in vection were only generated when the added oscillations were orthogonal to the principal direction of the optic flow pattern, and not when they occurred in the same direction. The oscillation effects observed in this investigation could not be explained by motion adaptation or by differing (motion-parallax-based) effects on depth perception. Instead, these results suggest that the oscillation advantage for vection depends on relative visual motion.

    Glideslope perception during aircraft landing

    When approaching a runway for landing, a pilot must ideally maintain a constant trajectory, or glideslope, of typically 3°-4°. If pilots misperceive their glideslope and alter their flight path accordingly, they are likely to overshoot or undershoot their desired touchdown point on the runway. This experiment examined the accuracy of passive glideslope perception during simulated fixed-wing aircraft landings. Seventeen university students were repeatedly exposed to the following four landing scene conditions: (i) a daylight scene of a runway surrounded by buildings and lying on a 100 km deep texture-mapped ground plane; (ii) a night scene with only the side runway lights visible; (iii) a night scene with the side, center, near-end and far-end runway lights visible and a visible horizon line; or (iv) a night scene with a runway outline (instead of discrete lights) and a visible horizon line. Each of these simulations lasted 2 seconds and represented a 130 km/hr landing approach towards a 30 m wide x 1000 m long runway with a glideslope ranging between 1° and 5°. On each experimental trial, participants viewed two simulated aircraft landings (one presented directly after the other): (a) an ideal 3° glideslope landing simulation; and (b) a comparison landing simulation, where the glideslope was either 1°, 1.5°, 2°, 2.5°, 3°, 3.5°, 4°, 4.5°, or 5°. Participants simply judged which of the two landing simulations appeared to have the steeper glideslope. As expected, the daylight landing scene simulations produced significantly more accurate glideslope judgments than any of the night landing simulations. However, performance was found to be unacceptably imprecise and biased for all of the landing simulation scenes. Even in daylight conditions, the smallest glideslope difference that could be reliably detected (i.e. that yielded 75% correct performance) exceeded 2° for 11 of our 16 subjects. It is concluded that glideslope differences of up to 2° cannot be accurately perceived on the basis of visual information alone, regardless of scene lighting or detail. The additional visual information provided by the ground surface and buildings in daylight significantly improved performance, but not to a level that would prevent landing incidents.
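    The practical cost of a glideslope misperception follows directly from the approach geometry: the angle is set by height over ground distance, so a shallower-than-intended slope stretches the descent and moves the touchdown point down the runway. The sketch below illustrates this with assumed numbers (the 100 m altitude is chosen for illustration; only the 3° ideal slope, 2° error size, and 1000 m runway length echo the abstract).

```python
import math

def glideslope_deg(height_m, ground_dist_m):
    """Glideslope angle (deg) from height above the runway and
    ground distance to the aim point."""
    return math.degrees(math.atan2(height_m, ground_dist_m))

def touchdown_dist_m(height_m, slope_deg):
    """Ground distance covered before touchdown when descending at a
    constant glideslope from the given height."""
    return height_m / math.tan(math.radians(slope_deg))

# Illustrative numbers (not from the study): from 100 m altitude, a
# pilot who flies a 2 deg slope instead of the intended 3 deg touches
# down roughly 950 m late - about the full length of the 1000 m
# simulated runway.
overshoot = touchdown_dist_m(100, 2.0) - touchdown_dist_m(100, 3.0)
```

    This is why a just-detectable difference exceeding 2° is operationally serious: an undetected 1° error already shifts the touchdown point by a runway-length-scale distance.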

    Effect of decorrelation on 3-D grating detection with static and dynamic random-dot stereograms

    Three experiments examined the effects of image decorrelation on the stereoscopic detection of sinusoidal depth gratings in static and dynamic random-dot stereograms (RDS). Detection was found to tolerate greater levels of image decorrelation as: (i) density increased from 23 to 676 dots/deg²; (ii) spatial frequency decreased from 0.88 to 0.22 cpd; (iii) amplitude increased above 0.5 arcmin; and (iv) dot lifetime decreased from 1.6 s (static RDS) to 80 ms (dynamic RDS). In each case, the specific pattern of tolerance to decorrelation could be explained by its consequences for image sampling, filtering, and the influence of depth noise.
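    The stimulus construction implied above can be sketched as follows: each dot's horizontal disparity is set by a sinusoidal function of its vertical position (the depth grating), and a chosen fraction of right-eye dots is replaced by dots at unrelated positions (the decorrelation). This is a hypothetical reconstruction, not the authors' code; the field size and default parameter values are assumptions, loosely echoing the ranges in the abstract.

```python
import math
import random

def make_rds_pair(n_dots, decorrelation=0.2, sf_cpd=0.22,
                  amp_arcmin=2.0, field_deg=5.0, seed=0):
    """One frame of a random-dot stereogram depicting a sinusoidal
    depth grating. Returns (left_dots, right_dots) as lists of
    (x, y) positions in degrees. A `decorrelation` fraction of
    right-eye dots is uncorrelated with the left-eye image."""
    rng = random.Random(seed)
    left, right = [], []
    for _ in range(n_dots):
        x = rng.uniform(0, field_deg)
        y = rng.uniform(0, field_deg)
        # Disparity (deg) varies sinusoidally with elevation,
        # producing the depth grating.
        disparity_deg = (amp_arcmin / 60.0) * math.sin(
            2 * math.pi * sf_cpd * y)
        left.append((x, y))
        if rng.random() < decorrelation:
            # Uncorrelated dot: random position, no matching
            # disparity signal for this element.
            right.append((rng.uniform(0, field_deg),
                          rng.uniform(0, field_deg)))
        else:
            right.append((x + disparity_deg, y))
    return left, right
```

    A dynamic RDS in this framing is simply a sequence of such frames with fresh dot positions every one or a few frames (the 80 ms dot lifetime), whereas a static RDS reuses one frame throughout.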